Autonomous Intelligent Systems


Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits

Dodig-Crnkovic, Gordana, Basti, Gianfranco, Holstein, Tobias

arXiv.org Artificial Intelligence

As AI systems increasingly operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged. This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices. Synthesizing recent developments in AI ethics, including concepts of distributed responsibility and ethical AI by design, the paper proposes a functionalist perspective as a framework. This perspective views moral responsibility not as an individual trait but as a role within a socio-technical system, distributed among human and artificial agents. As an example of 'AI ethical by design,' the paper presents Basti and Vitiello's implementation, in which AI systems can act as artificial moral agents by learning ethical guidelines and using Deontic Higher-Order Logic to assess decisions ethically. Motivated by the ethical implications of AI operating at speeds and scales beyond human supervision, the paper argues for 'AI ethical by design,' while acknowledging the distributed, shared, and dynamic nature of responsibility. This functionalist approach offers a practical framework for navigating the complexities of AI ethics in a rapidly evolving technological landscape.
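To make the idea of screening decisions against learned ethical guidelines concrete, here is a deliberately minimal sketch. It is far simpler than the Deontic Higher-Order Logic the abstract refers to, and all action names and rule sets are illustrative assumptions, not Basti and Vitiello's actual implementation: a candidate plan is checked against explicit obligations and prohibitions before execution.

```python
# Toy deontic screen: a plan is permitted only if it includes every
# obligatory action and contains no forbidden action.
# These rule sets are hypothetical examples, not from the paper.
OBLIGATORY = {"log_decision"}
FORBIDDEN = {"share_private_data"}

def ethically_permitted(plan: list[str]) -> bool:
    """Return True if the plan satisfies all obligations and avoids all prohibitions."""
    steps = set(plan)
    return OBLIGATORY <= steps and not (FORBIDDEN & steps)
```

In a real deontic-logic treatment, obligation and permission would be modal operators over formulas rather than flat sets of action labels; the sketch only conveys the gatekeeping role such a check plays.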


A Measure for Level of Autonomy Based on Observable System Behavior

Pittman, Jason M.

arXiv.org Artificial Intelligence

Contemporary artificial intelligence systems are pivotal in enhancing human efficiency and safety across various domains. One such domain is autonomous systems, especially in automotive and defense use cases. Artificial intelligence brings learning and enhanced decision-making to an autonomous system's goal-oriented behaviors and its independence from humans. However, the lack of a clear understanding of an autonomous system's capabilities hampers human-machine or machine-machine interaction and interdiction. This necessitates varying degrees of human involvement for safety, accountability, and explainability purposes. Yet, measuring the level of autonomous capability in an autonomous system presents a challenge. Two scales of measurement exist, yet measuring autonomy presupposes a variety of elements not available in the wild. This is why existing measures for level of autonomy are operationalized only during design or test and evaluation phases. No measure for level of autonomy based on observed system behavior exists at this time. To address this, we outline a potential measure for predicting level of autonomy using observable actions. We also present an algorithm incorporating the proposed measure. The measure and algorithm are significant to researchers and practitioners interested in a method to blind-compare autonomous systems at runtime. Defense-based implementations are likewise possible because counter-autonomy depends on robust identification of autonomous systems.
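The abstract does not give the measure itself, so the following is only an illustrative stand-in for the general idea of inferring a discrete autonomy level from a window of observed actions. The observation record, the scoring rule (fraction of actions that are system-initiated and adaptive), and the binning into levels are all assumptions for the sketch, not Pittman's proposed measure.

```python
from dataclasses import dataclass

@dataclass
class ObservedAction:
    # Hypothetical observation record: who initiated the action, and
    # whether the system adapted its behavior in response to conditions.
    system_initiated: bool
    adaptive: bool

def autonomy_level(actions: list[ObservedAction], n_levels: int = 5) -> int:
    """Map a window of observed actions to a discrete autonomy level.

    Illustrative rule: the fraction of actions that are both
    system-initiated and adaptive, binned into n_levels levels
    (0 = fully manual, n_levels - 1 = fully autonomous).
    """
    if not actions:
        return 0
    score = sum(a.system_initiated and a.adaptive for a in actions) / len(actions)
    return min(int(score * n_levels), n_levels - 1)
```

A runtime blind comparison would then amount to computing this level for two observed systems over comparable windows and comparing the results, without access to either system's design documentation.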


Five Principles of Safe Driving in AIS (Autonomous Intelligent Systems) - DataScienceCentral.com

#artificialintelligence

In a recent article on Autonomous Intelligent Systems (AIS) [1], Ajit Jaokar described various features and characteristics of such systems, including associated technologies and research areas, building blocks and core elements, critical factors for success, and cross-cutting enablers. He introduces AIS as an "emerging interdisciplinary field that deals with situations where humans interact with AI systems that are autonomous." From this, we immediately see the synergistic interaction between the intelligent system and its human users. While full autonomy suggests that the system can operate without human interaction, it is useful to leave open the opportunity (even the essential necessity) for human intervention, to provide mid-course corrections that keep the AIS on the right (and ethical) course. Another detailed description of AIS comes from a 2014 report on Autonomous Manufacturing [2].


Autonomous Intelligent Systems - DataScienceCentral.com

#artificialintelligence

Autonomous Intelligent Systems are AI software systems that act independently of direct human supervision, e.g., self-driving cars, UAVs, smart manufacturing robots, care robots for the elderly, and virtual agents for training or support. Such systems need to be able to make safe, rational, and human-values-compatible decisions in unforeseen circumstances. Their decision-making should be understandable by the humans who use and collaborate with them, to establish the necessary trust. There are multiple challenges in this area spanning both technology and ethics. It is also an interdisciplinary subject spanning science, the humanities, and the social sciences.


Canada Protocol: an ethical checklist for the use of Artificial Intelligence in Suicide Prevention and Mental Health

Mörch, Carl-Maria, Gupta, Abhishek, Mishara, Brian L.

arXiv.org Artificial Intelligence

Introduction: To improve current public health strategies in suicide prevention and mental health, governments, researchers, and private companies increasingly use information and communication technologies, and more specifically Artificial Intelligence and Big Data. These technologies are promising but raise ethical challenges rarely covered by current legal systems. It is essential to better identify and prevent potential ethical risks. Objectives: The Canada Protocol - MHSP is a tool to guide and support professionals, users, and researchers using AI in mental health and suicide prevention. Methods: A checklist was constructed based on ten international reports on AI and ethics and two guides on mental health and new technologies. 329 recommendations were identified, of which 43 were considered applicable to mental health and AI. The checklist was validated using a two-round Delphi consultation. Results: 16 experts participated in the first round of the Delphi consultation and 8 participated in the second round. Of the original 43 items, 38 were retained. They concern five categories: "Description of the Autonomous Intelligent System" (n=8), "Privacy and Transparency" (n=8), "Security" (n=6), "Health-Related Risks" (n=8), "Biases" (n=8). The checklist was considered relevant by most users, and may need versions tailored to each category of target users.
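The five retained categories and their item counts can be captured in a small data structure, which makes the reported total easy to verify. Only the category names and counts come from the abstract; the individual checklist items are not reproduced here, and the structure itself is just one convenient representation.

```python
# Categories and item counts as reported for the Canada Protocol - MHSP;
# the 38 individual checklist items are not listed in the abstract.
CANADA_PROTOCOL_MHSP = {
    "Description of the Autonomous Intelligent System": 8,
    "Privacy and Transparency": 8,
    "Security": 6,
    "Health-Related Risks": 8,
    "Biases": 8,
}

def total_items(checklist: dict[str, int]) -> int:
    """Sum the per-category item counts."""
    return sum(checklist.values())
```

Summing the counts reproduces the 38 retained items reported in the Results.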


Embedding Values into Autonomous Intelligent Systems

#artificialintelligence

How do we implement values in AI systems? New breakthroughs are helping scientists discover methods that give intelligent systems an understanding of human values. When there is no values-based framework for artificial intelligence, the biases already present in the code will define its ethics by default. The risk of unclear global ethical standards is that uncertain expectations can stifle innovation in artificial intelligence, even though AI systems may help solve the world's problems and manage parts of the global economy.